
    A surrogate accelerated multicanonical Monte Carlo method for uncertainty quantification

    In this work we consider a class of uncertainty quantification problems where the system performance or reliability is characterized by a scalar parameter y. The performance parameter y is random due to the presence of various sources of uncertainty in the system, and our goal is to estimate the probability density function (PDF) of y. We propose to use the multicanonical Monte Carlo (MMC) method, a special type of adaptive importance sampling algorithm, to compute the PDF of interest. Moreover, we develop an adaptive algorithm to construct local Gaussian process surrogates to further accelerate the MMC iterations. With numerical examples we demonstrate that the proposed method can achieve several orders of magnitude of speedup over the standard Monte Carlo method.
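
    The core MMC idea, iteratively reweighting histogram bins of y so that sampling becomes flat across them and then recovering the PDF from the weights, can be sketched as follows. This is a minimal illustrative stand-in (the toy performance function, bin counts, and tuning values are assumptions, not from the paper, and it omits the surrogate acceleration):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):                        # toy performance parameter: y = |x|^2, x ~ N(0, I_2)
    return x @ x

nbins, ymax = 10, 10.0
width = ymax / nbins
logw = np.zeros(nbins)           # log bin weights = running log-PDF estimate

x = np.zeros(2)
for it in range(12):             # MMC iterations: sample, then re-flatten
    hist = np.zeros(nbins)
    y = f(x)
    b = min(int(y / width), nbins - 1)
    for _ in range(10000):       # Metropolis chain targeting pi(x) / w_{bin(f(x))}
        xp = x + 0.8 * rng.standard_normal(2)
        yp = f(xp)
        if yp < ymax:
            bp = min(int(yp / width), nbins - 1)
            log_acc = -0.5 * (xp @ xp - x @ x) - (logw[bp] - logw[b])
            if np.log(rng.random()) < log_acc:
                x, y, b = xp, yp, bp
        hist[b] += 1
    seen = hist > 0
    # observed counts satisfy H_b ~ P_b / w_b, so P_b ~ H_b * w_b
    logw[seen] += np.log(hist[seen] / hist[seen].mean())

p = np.exp(logw - logw.max())    # normalized PDF estimate on [0, ymax)
p /= p.sum() * width
```

    For this toy problem y is chi-squared with 2 degrees of freedom, so the recovered log-PDF should decay linearly with slope -1/2, including in bins that plain Monte Carlo would rarely visit.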

    A subset multicanonical Monte Carlo method for simulating rare failure events

    Estimating failure probabilities of engineering systems is an important problem in many engineering fields. In this work we consider such problems where the failure probability is extremely small (e.g., ≤ 10^{-10}). In this case, standard Monte Carlo methods are not feasible due to the extraordinarily large number of samples required. To address these problems, we propose an algorithm that combines the main ideas of two very powerful failure-probability estimation approaches: the subset simulation (SS) and the multicanonical Monte Carlo (MMC) methods. Unlike standard MMC, which samples the entire domain of the input parameter in each iteration, the proposed subset MMC algorithm adaptively performs MMC simulations in a subset of the state space and thus improves the sampling efficiency. With numerical examples we demonstrate that the proposed method is significantly more efficient than both the SS and the MMC methods. Moreover, the proposed algorithm can reconstruct the complete distribution function of the parameter of interest and thus provides more information than just the failure probabilities of the systems.
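
    The subset simulation building block that the proposed method modifies can be sketched as follows: the rare event is reached through a sequence of intermediate thresholds, each conditional level sampled by a Metropolis chain restricted to the current subset. The limit-state function, sample sizes, and level probability p0 below are illustrative assumptions, and this is plain SS, not the paper's subset MMC:

```python
import numpy as np
from math import erfc

rng = np.random.default_rng(1)

def g(x):                            # limit state: failure when g(x) > T
    return x[..., 0] + x[..., 1]

T, N, p0 = 7.0, 2000, 0.1            # exact P(g > 7) = 0.5 * erfc(3.5) ~ 3.7e-7
x = rng.standard_normal((N, 2))
logp = 0.0
for level in range(30):
    vals = g(x)
    if np.mean(vals > T) >= p0:      # final level reached
        logp += np.log(np.mean(vals > T))
        break
    thr = np.quantile(vals, 1.0 - p0)          # intermediate threshold
    logp += np.log(np.mean(vals > thr))
    seeds = x[vals > thr]
    # regenerate N samples conditional on {g > thr} via restricted Metropolis
    cur, batches = seeds.copy(), []
    for _ in range(max(1, N // len(seeds))):
        prop = cur + rng.standard_normal(cur.shape)
        log_acc = -0.5 * (np.sum(prop**2, 1) - np.sum(cur**2, 1))
        ok = (np.log(rng.random(len(cur))) < log_acc) & (g(prop) > thr)
        cur[ok] = prop[ok]
        batches.append(cur.copy())
    x = np.concatenate(batches)

p_fail = np.exp(logp)                # product of conditional level probabilities
```

    Standard Monte Carlo would need on the order of 10^8 samples to see even a handful of failures here, while SS reaches the event with a few thousand samples per level.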

    A Derivative-Free Trust-Region Algorithm for Reliability-Based Optimization

    In this note, we present a derivative-free trust-region (TR) algorithm for reliability-based optimization (RBO) problems. The proposed algorithm consists of solving a sequence of subproblems, in which simple surrogate models of the reliability constraints are constructed and used. Taking advantage of the special structure of RBO problems, we employ a sample-reweighting method to evaluate the failure probabilities, which constructs the surrogate for the reliability constraints by performing only a single full reliability evaluation in each iteration. With numerical experiments, we illustrate that the proposed algorithm is competitive against existing methods.
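
    The sample-reweighting idea, one full Monte Carlo evaluation at the current design reused for nearby designs via likelihood ratios, can be sketched for a toy scalar design variable (the limit state and shift of a Gaussian input are illustrative assumptions, not the paper's setup):

```python
import numpy as np
from math import erfc

rng = np.random.default_rng(2)

def g(x):                        # toy limit state: failure when g(x) <= 0
    return 3.0 - x

n = 200_000
x0 = rng.standard_normal(n)      # one full reliability evaluation at design mu0 = 0
fail = g(x0) <= 0.0              # failure indicators, computed once

def p_fail(mu):
    # reweight the stored samples to the shifted input density N(mu, 1):
    # w(x) = N(x; mu, 1) / N(x; 0, 1) = exp(mu*x - mu^2/2)
    w = np.exp(mu * x0 - 0.5 * mu**2)
    return np.mean(fail * w)

def exact(mu):                   # analytic check: P(X > 3), X ~ N(mu, 1)
    return 0.5 * erfc((3.0 - mu) / np.sqrt(2.0))
```

    Within a trust region around mu0 the reweighted estimate is a cheap, smooth surrogate of the failure probability, which is what makes derivative-free subproblem solves practical.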

    Gaussian process surrogates for failure detection: a Bayesian experimental design approach

    An important task of uncertainty quantification is to identify the probability of undesired events, in particular system failures, caused by various sources of uncertainty. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation where the underlying computer models are extremely expensive, in which case determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inference of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. Notably, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method are demonstrated by both academic and practical examples.
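
    The general pattern, a Gaussian process surrogate of the limit-state function whose next evaluation point is chosen where the failure/safe classification is most uncertain, can be sketched in one dimension. This uses the simple sequential U-type criterion as a stand-in; the paper's design criterion is a Bayesian experimental design that can select batches of points, and the toy limit state and kernel settings here are assumptions:

```python
import numpy as np

def g(x):                            # "expensive" limit state (toy stand-in)
    return np.exp(x) - 3.0           # failure boundary at x = ln 3 ~ 1.0986

def k(a, b, ell=0.5):                # squared-exponential kernel
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

X = np.array([0.0, 2.0])             # initial design
Y = g(X)
cand = np.linspace(0.0, 2.0, 101)    # candidate pool for the next evaluation

for step in range(5):                # sequential design loop
    K = k(X, X) + 1e-8 * np.eye(len(X))
    ks = k(cand, X)
    mu = ks @ np.linalg.solve(K, Y)                 # GP posterior mean
    var = np.clip(1.0 - np.sum(ks * np.linalg.solve(K, ks.T).T, axis=1),
                  1e-12, None)                      # GP posterior variance
    u = np.abs(mu) / np.sqrt(var)    # small u = sign of g most uncertain
    xn = cand[np.argmin(u)]          # evaluate where misclassification risk is highest
    X, Y = np.append(X, xn), np.append(Y, g(xn))

K = k(X, X) + 1e-8 * np.eye(len(X))
mu = k(cand, X) @ np.linalg.solve(K, Y)
boundary = cand[np.argmin(np.abs(mu))]   # estimated limit state (zero crossing)
```

    After a handful of model evaluations the design points cluster around the failure boundary, which is exactly the behavior one wants when each evaluation is expensive.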

    On an adaptive preconditioned Crank-Nicolson MCMC algorithm for infinite dimensional Bayesian inferences

    Many scientific and engineering problems require performing Bayesian inference for unknowns of infinite dimension. In such problems, many standard Markov Chain Monte Carlo (MCMC) algorithms become arbitrarily slow under mesh refinement, a behavior referred to as being dimension dependent. To address this issue, a family of dimension-independent MCMC algorithms, known as the preconditioned Crank-Nicolson (pCN) methods, was proposed to sample the infinite-dimensional parameters. In this work we develop an adaptive version of the pCN algorithm, in which the covariance operator of the proposal distribution is adjusted based on the sampling history to improve simulation efficiency. We show that the proposed algorithm satisfies an important ergodicity condition under mild assumptions. Finally, we provide numerical examples to demonstrate the performance of the proposed method.
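
    The base pCN update that this work builds on is easy to state: propose v = sqrt(1-beta^2) u + beta xi with xi drawn from the prior, and accept using only the likelihood (misfit) difference, since the proposal is reversible with respect to the Gaussian prior. A finite-dimensional toy discretization (the dimension, misfit, and beta below are illustrative assumptions; the adaptive version in the paper additionally tunes the proposal covariance from the history):

```python
import numpy as np

rng = np.random.default_rng(4)

d, m, s2 = 50, 5, 0.1            # discretized dimension, observed coords, noise var
data = np.ones(m)                # synthetic data on the first m coordinates

def phi(u):                      # negative log-likelihood (misfit)
    return 0.5 * np.sum((u[:m] - data)**2) / s2

beta, n_iter, burn = 0.3, 30000, 5000
u = np.zeros(d)
chain = np.empty((n_iter, d))
for i in range(n_iter):
    # pCN proposal: prior-reversible, so acceptance uses only the misfit
    v = np.sqrt(1.0 - beta**2) * u + beta * rng.standard_normal(d)
    if np.log(rng.random()) < phi(u) - phi(v):
        u = v
    chain[i] = u

post = chain[burn:]
```

    For this conjugate toy problem the observed coordinates have posterior mean (1/s2)/(1/s2 + 1) = 10/11, which the chain should recover; the key property is that the acceptance rule does not degrade as d grows.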

    A hybrid adaptive MCMC algorithm in function spaces

    The preconditioned Crank-Nicolson (pCN) method is a Markov Chain Monte Carlo (MCMC) scheme specifically designed to perform Bayesian inference in function spaces. Unlike many standard MCMC algorithms, the pCN method preserves its sampling efficiency under mesh refinement, a property referred to as being dimension independent. In this work we consider an adaptive strategy to further improve the efficiency of pCN. In particular, we develop a hybrid adaptive MCMC method: the algorithm performs an adaptive Metropolis scheme in a chosen finite-dimensional subspace and a standard pCN algorithm in the complement of that subspace. We show that the proposed algorithm satisfies certain important ergodicity conditions. Finally, with numerical examples we demonstrate that the proposed method has competitive performance with existing adaptive algorithms.
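
    The split described above can be sketched in a finite-dimensional toy discretization: an adaptive random-walk proposal on the data-informed subspace (covariance estimated from the history) combined with a pCN proposal on the complement. Because the random-walk part is symmetric and the pCN part is prior-reversible, the acceptance ratio combines the misfit difference with the prior ratio on the subspace only. All dimensions, the misfit, and the adaptation schedule are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

d, kdim, s2 = 50, 5, 0.1         # ambient dim, adapted subspace dim, noise var
data = np.ones(kdim)

def phi(u):                      # negative log-likelihood (misfit)
    return 0.5 * np.sum((u[:kdim] - data)**2) / s2

beta, n_iter, burn = 0.3, 30000, 5000
sd = 2.38**2 / kdim              # classic adaptive-Metropolis scaling
C = np.eye(kdim)                 # adapted proposal covariance on the subspace
L = np.linalg.cholesky(C)
u = np.zeros(d)
hist = np.zeros((n_iter, d))

for i in range(n_iter):
    v = u.copy()
    v[:kdim] = u[:kdim] + np.sqrt(sd) * (L @ rng.standard_normal(kdim))  # AM part
    v[kdim:] = np.sqrt(1 - beta**2) * u[kdim:] \
        + beta * rng.standard_normal(d - kdim)                           # pCN part
    # misfit difference plus the prior ratio on the random-walk subspace
    log_acc = phi(u) - phi(v) - 0.5 * (v[:kdim] @ v[:kdim] - u[:kdim] @ u[:kdim])
    if np.log(rng.random()) < log_acc:
        u = v
    hist[i] = u
    if i >= 1000 and i % 500 == 0:   # adapt subspace covariance from the history
        C = np.cov(hist[:i, :kdim].T) + 1e-6 * np.eye(kdim)
        L = np.linalg.cholesky(C)

post = hist[burn:]
```

    The adapted covariance lets the informed subspace mix at the posterior scale, while the pCN part keeps the complement dimension independent.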

    On Estimating the Gradient of the Expected Information Gain in Bayesian Experimental Design

    Bayesian Experimental Design (BED), which aims to find the optimal experimental conditions for Bayesian inference, is usually posed as optimizing the expected information gain (EIG). Gradient information is often needed for efficient EIG optimization, and as a result the ability to estimate the gradient of the EIG is essential for BED problems. The primary goal of this work is to develop methods for estimating the gradient of the EIG, which, combined with stochastic gradient descent algorithms, result in efficient optimization of the EIG. Specifically, we first introduce a posterior-expectation representation of the EIG gradient with respect to the design variables. Based on this, we propose two methods for estimating the EIG gradient: UEEG-MCMC, which leverages posterior samples generated by Markov Chain Monte Carlo (MCMC) to estimate the EIG gradient, and BEEG-AP, which focuses on achieving high simulation efficiency by repeatedly using parameter samples. Theoretical analysis and numerical studies illustrate that UEEG-MCMC is robust with respect to the actual EIG value, while BEEG-AP is more efficient when the EIG value to be optimized is small. Moreover, both methods show superior performance compared to several popular benchmarks in our numerical experiments.
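
    The quantity being optimized can be made concrete with the standard nested Monte Carlo EIG estimator (this is the common baseline, not the paper's UEEG-MCMC or BEEG-AP gradient estimators). For a linear-Gaussian toy model y = d*theta + noise with theta ~ N(0,1), the EIG is available in closed form, EIG(d) = 0.5 log(1 + d^2/sigma^2), which makes the sketch checkable; the model and sample sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
sigma2 = 1.0                         # observation noise variance

def log_lik(y, theta, d):            # y | theta, d ~ N(d * theta, sigma2)
    return -0.5 * np.log(2 * np.pi * sigma2) - 0.5 * (y - d * theta)**2 / sigma2

def eig_nmc(d, N=2000, M=2000):
    # EIG(d) = E_{theta, y} [ log p(y | theta, d) - log p(y | d) ]
    th = rng.standard_normal(N)                  # prior samples
    y = d * th + np.sqrt(sigma2) * rng.standard_normal(N)
    inner = rng.standard_normal(M)               # fresh prior samples for the marginal
    ll = log_lik(y[:, None], inner[None, :], d)  # (N, M) inner log-likelihoods
    log_marg = np.logaddexp.reduce(ll, axis=1) - np.log(M)
    return np.mean(log_lik(y, th, d) - log_marg)
```

    Each evaluation of this estimator costs N*M likelihood evaluations, which is precisely why gradient estimators that reuse samples across designs, as in the work above, matter for optimizing over d.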

    Entropy estimation via uniformization
